Discussing the "Real" Expectations
Understanding the expectations users have for serverless computing.
Running applications locally#
What do we expect from serverless, or for that matter, any type of deployment services?
We expect to be able to develop our applications locally and to run them in clusters. Ideally, local environments would be the same as clusters, but that’s not critical. It’s okay if they’re similar. As long as we can easily spin up the application we’re working on, together with its direct dependencies, we should be able to work locally. Our laptops tend to have quite a few processors and gigabytes of memory, so why not use them? That doesn’t mean we exclude development that relies entirely on servers in the cloud, but rather that we believe the ability to work locally is still essential. That might change in the future, but that future is not here yet.
We want to develop and run applications locally before deploying them to other “real” environments. It doesn’t matter much whether those applications are monoliths, microservices, or functions. Similarly, it should be irrelevant whether they’ll be deployed as serverless, as Kubernetes resources, as processes running on servers, or anything else. We want to be able to develop locally, no matter the deployment target.
Common denominator#
We also need a common denominator before we switch to higher-level specific implementations. Today, that common denominator is a container image. We might argue whether we should run on bare-metal or VMs, whether we should deploy to servers or clusters, whether we should use a scheduler or not, and so on and so forth. One of the things that almost no one argues anymore is that our applications should be packaged as container images.
That’s the highest common denominator we have today. It doesn’t matter whether we use Docker, Docker Compose, Mesos, Kubernetes, a service that doesn’t provide visibility into what’s below it, or anything else. What matters is that it’s always based on container images. We can even convert those into VMs and skip running containers altogether. Container images are a universal packaging mechanism for our applications.
Note: Container images are not a truly universal common denominator. There are still those using mainframes. We’ll ignore them. There are also those developing for macOS—they’re the exception that proves the rule.
Container images are so beneficial and commonly used that we want to say, “Here’s my image, run it.” The primary question is whether that should be done by executing docker-compose, kubectl, or something else. There’s nothing necessarily wrong with adding additional layers of abstraction if that results in the alleviation of some of the complexity.
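To make the “here’s my image, run it” idea concrete, here is a minimal Docker Compose sketch. The service name, image, and port are hypothetical placeholders; substitute your own. The point is how little we need to tell the platform beyond the image itself:

```yaml
# docker-compose.yaml -- a hypothetical single-service sketch.
# The whole contract boils down to: here's my image, run it.
services:
  my-app:
    image: ghcr.io/example/my-app:1.0.0  # hypothetical image
    ports:
      - "8080:8080"                      # assumed application port
```

Running `docker compose up` is then all it takes, while `kubectl create deployment my-app --image=ghcr.io/example/my-app:1.0.0` expresses the same intent against a Kubernetes cluster.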
Maintaining standards#
Then there’s the emergence of standards. We can say that having a standard in an area of software engineering is a sign of maturity. Such standards are often de facto, not something decided by a handful of people. One such standard is container images and container runtimes: no matter which tool we use to build images or run containers, most use the same formats and the same APIs. Standards often emerge when a sufficient number of people use something for a sufficient period. That doesn’t mean everyone uses them, but rather that adoption is high enough that we can say the majority does.
So, we want to have some sort of a standard and let service providers compete on top of it. We don’t want to be locked in more than necessary. That’s why we love Kubernetes. It provides a common API that is, more or less, the same no matter who is in charge of it or where it’s running. It doesn’t matter whether Kubernetes is running in AWS, Google, Azure, DigitalOcean, Linode, your own datacenter, or anywhere else. It’s the same API. We can learn it once and be confident that the knowledge applies no matter where we work or where our servers run. Can we have something similar for serverless deployments? Can’t we get a common API and let service vendors compete on top of it with lower prices, more reliable service, additional features, or any other way they see fit?
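That portability is easy to illustrate. The manifest below is a minimal Deployment sketch (the name and image are hypothetical). The same YAML can be applied with `kubectl apply -f` to a cluster in AWS, Google, Azure, DigitalOcean, Linode, or your own datacenter, because nothing in it is vendor-specific:

```yaml
# deployment.yaml -- a provider-agnostic sketch; nothing in it
# references any particular cloud or vendor.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                               # hypothetical name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: ghcr.io/example/my-app:1.0.0  # hypothetical image
```

The competition between providers then happens below this API, in pricing, reliability, and managed features, rather than in incompatible deployment formats.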
Restrictions with serverless deployment#
Then there’s the issue with restrictions. They’re unavoidable. There’s no such thing as an unlimited and unrestricted platform. Still, some of the limitations are acceptable, while others are not:
- We don’t want anyone to tell us which languages to use to write applications. That doesn’t mean we won’t accept advice or admit that some languages are better than others. Still, we don’t want to be constrained either.
- If we feel that Rust is the right choice for a given task, we want to use it. The platform we use to deploy our applications shouldn’t dictate which language we use. It can “suggest” that something is a better choice than something else, but it shouldn’t restrict our creativity. To put it bluntly, it shouldn’t matter which language we use to write applications.
- We also want to choose the level of involvement we have. For example, dedicating a replica of an application to each request might fit some use cases, but there can be (and usually are) situations where we want to serve up to one thousand concurrent requests with a single replica. That can’t be a decision left solely to the platform where the application is running; it’s part of the application’s architecture as well.
- We accept that serverless service providers must restrict the number of choices they give users; the options can’t be limited only by our imagination. Nevertheless, there should be a healthy balance between simplicity, reliability, and the freedom to tweak a service to meet a specific use case’s goals.
- Then there’s the issue of the types of applications. Functions are great, but they aren’t the solution to all the problems in the universe. For some use cases, microservices are a better fit, while for others, we might be better off with monoliths. Should we be restricted to functions when performing serverless deployments? Is there a “serverless manifesto” that says it must be a function?
Note: We’re fully aware that some types of applications are better candidates for serverless than others. That isn’t much different from, let’s say, Kubernetes: some applications benefit more from running in Kubernetes than others. Still, it should be our decision which applications go where.
We want to be able to leverage serverless deployments for applications of any size, whether they are stateless or stateful. We want to hand someone else the job of provisioning and managing the infrastructure, scaling our applications, and keeping them highly available. That allows us to focus on our core business and deliver the next killer feature as fast as possible.
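As an illustration of choosing our own level of involvement, Knative Serving exposes exactly this kind of knob through the `containerConcurrency` field. The sketch below (the service name and image are hypothetical) tells the platform that a single replica may serve up to one thousand concurrent requests, rather than leaving that decision entirely to the platform:

```yaml
# service.yaml -- a Knative Service sketch with an explicit
# concurrency target instead of the platform's default.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-app                                   # hypothetical name
spec:
  template:
    spec:
      containerConcurrency: 1000                 # up to 1000 concurrent requests per replica
      containers:
        - image: ghcr.io/example/my-app:1.0.0    # hypothetical image
```

Setting the value to `1` would instead dedicate a replica to each request; the point is that the trade-off stays in our hands, as part of the application’s architecture.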